
    A Deep Multi-View Learning Framework for City Event Extraction from Twitter Data Streams

    Cities have been a thriving place for citizens over the centuries due to their complex infrastructure. The emergence of Cyber-Physical-Social Systems (CPSS) and context-aware technologies has fostered growing interest in analysing, extracting and ultimately understanding city events, which can subsequently be used to leverage citizens' observations of their cities. In this paper, we investigate the feasibility of using Twitter textual streams for extracting city events. We propose a hierarchical multi-view deep learning approach to contextualise citizen observations of various city systems and services. Our goal has been to build a flexible architecture that can learn representations useful for the task, thus avoiding excessive task-specific feature engineering. We apply our approach to a real-world dataset consisting of over four months of event reports and tweets from the San Francisco Bay Area, together with additional datasets collected from London. The results of our evaluations show that our proposed solution outperforms the existing models and can be used for extracting city-related events with an average accuracy of 81% over all classes. To further evaluate the impact of our Twitter event extraction model, we have used two sources of authorised reports: road traffic disruption data collected from the Transport for London API, and sociocultural events parsed from the Time Out London website. The analysis showed that 49.5% of the Twitter traffic comments are reported approximately five hours prior to the authorities' official records. Moreover, we discovered that amongst the scheduled sociocultural event topics, tweets reporting transportation, cultural and social events are 31.75% more likely to influence the distribution of the Twitter comments than sport, weather and crime topics.
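    As a minimal sketch of the multi-view idea described above (not the authors' architecture), the snippet below builds a small two-view classifier: one branch encodes the tweet text, a second branch encodes simple contextual features, and the two are merged before a softmax over event classes. The class count, vocabulary size and feature choices are illustrative assumptions.

```python
# Minimal two-view (text + context) classifier sketch; hyperparameters are illustrative.
from tensorflow.keras import layers, Model

VOCAB_SIZE, SEQ_LEN, N_CONTEXT, N_CLASSES = 20000, 40, 8, 6

# View 1: tweet text as a token-id sequence -> embedding -> pooled representation
text_in = layers.Input(shape=(SEQ_LEN,), name="tokens")
x = layers.Embedding(VOCAB_SIZE, 64)(text_in)
x = layers.GlobalAveragePooling1D()(x)

# View 2: contextual features (e.g. hour of day, day of week, location bucket)
ctx_in = layers.Input(shape=(N_CONTEXT,), name="context")
y = layers.Dense(16, activation="relu")(ctx_in)

# Merge the views and classify into city-event categories
z = layers.Concatenate()([x, y])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(N_CLASSES, activation="softmax")(z)

model = Model([text_in, ctx_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```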

    Comparing machine learning clustering with latent class analysis on cancer symptoms' data

    Symptom Cluster Research is a major topic in Cancer Symptom Science. Despite the several statistical and clinical approaches in this domain, there is no consensus on which method performs best. Identifying a generally accepted analytical method is important in order to be able to utilize and process all the available data. In this paper we report a secondary analysis of cancer symptom data, comparing the performance of five Machine Learning (ML) clustering algorithms. Based on how well they separate specific subsets of symptom measurements, we select the best of them and compare its performance with the Latent Class Analysis (LCA) method. This analysis is part of an ongoing study for identifying suitable Machine Learning algorithms to analyse and predict cancer symptoms in cancer treatment.
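    A minimal sketch of this kind of comparison (not the paper's protocol) is to fit several scikit-learn clustering algorithms on a patient-by-symptom matrix and score each with a common internal criterion. The synthetic data, algorithm list and choice of k below are illustrative assumptions.

```python
# Fit several clustering algorithms and compare silhouette scores on the same data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering, Birch
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(300, 12)))  # 300 patients x 12 symptom measures

models = {
    "kmeans": KMeans(n_clusters=4, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=4),
    "spectral": SpectralClustering(n_clusters=4, random_state=0),
    "birch": Birch(n_clusters=4),
    "gmm": GaussianMixture(n_components=4, random_state=0),
}

for name, model in models.items():
    labels = model.fit_predict(X)
    print(f"{name:14s} silhouette = {silhouette_score(X, labels):.3f}")
```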

    Implementing a system for the real-time risk assessment of patients considered for intensive care

    BACKGROUND: Delay in identifying deterioration in hospitalised patients is associated with delayed admission to an intensive care unit (ICU) and poor outcomes. For the HAVEN project (HICF ref.: HICF-R9-524), we have developed a mathematical model that identifies deterioration in hospitalised patients in real time and facilitates the intervention of an ICU outreach team. This paper describes the system that has been designed to implement the model. We have used innovative technologies such as the Portable Format for Analytics (PFA) and the Open Services Gateway initiative (OSGi) to define the predictive statistical model and to implement the system, respectively, for greater configurability, reliability, and availability. RESULTS: The HAVEN system has been deployed as part of a research project in the Oxford University Hospitals NHS Foundation Trust. The system has so far processed over 164,000 vital-sign observations and over 68,000 laboratory results for more than 12,500 patients, and the algorithm-generated score is being evaluated to review patients who are under consideration for transfer to ICU. No clinical decisions are being made based on output from the system. The HAVEN score has been computed using a PFA model for all these patients. The intent is that this score will be displayed on a graphical user interface for clinician review and response. CONCLUSIONS: The system uses a configurable PFA model to compute the HAVEN score, which makes it straightforward to upgrade the system's predictive capability. Further system enhancements are planned to handle new data sources and additional management screens.
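    The key benefit claimed for PFA here is that the scoring model lives in a configuration document rather than in application code. The sketch below illustrates that idea only in spirit: a logistic score whose coefficients are read from a JSON document so the model can be swapped without redeploying code. The feature names and coefficients are hypothetical and are not the HAVEN model or the PFA format itself.

```python
# Illustrative "model as a configuration document" sketch; coefficients are hypothetical.
import json
import math

model_doc = json.loads("""
{
  "intercept": -4.0,
  "coefficients": {"heart_rate": 0.02, "resp_rate": 0.10, "spo2": -0.05}
}
""")

def score(obs: dict, doc: dict) -> float:
    """Logistic score from a coefficient document; missing vitals are skipped."""
    z = doc["intercept"] + sum(
        w * obs[name] for name, w in doc["coefficients"].items() if name in obs
    )
    return 1.0 / (1.0 + math.exp(-z))

print(score({"heart_rate": 110, "resp_rate": 24, "spo2": 92}, model_doc))
```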

    Transductive transfer learning for computer vision.

    Artificial intelligence and machine learning technologies have already achieved significant success in classification, regression and clustering. However, many machine learning methods work well only under a common assumption that training and test data are drawn from the same feature space and the same distribution. A real-world example is in sports footage, where an intelligent system has been designed and trained to detect score-changing events in a tennis singles match, and we are interested in transferring this learning to tennis doubles or even a more challenging domain such as badminton. When such distribution changes occur, most statistical models need to be rebuilt using newly collected training data. In many real-world applications, it is expensive or even impossible to collect the required training data and rebuild the models. One of the ultimate goals of open-ended learning systems is to take advantage of previous experience and knowledge when dealing with similar future problems. Two levels of learning can be identified in such scenarios. One draws on the data by capturing the patterns and regularities that enable reliable predictions on new samples. The other starts from an acquired source of knowledge and focuses on how to generalise it to a new target concept; this is also known as transfer learning, which is the main focus of this thesis. This work is devoted to this second level of learning, focusing on how to transfer information from previous learning and exploit it on a new learning problem with no supervisory information available for the new target data. We propose several solutions to such tasks by leveraging prior models or features. In the first part of the thesis we show how to estimate reliable transformations from the source domain to the target domain with the aim of reducing the dissimilarities between the source class-conditional distribution and a new unlabelled target distribution. We then present a fully automated transfer learning framework which approaches the problem by combining four types of adaptation: a projection to a lower-dimensional space that is shared between the two domains, a set of local transformations to further increase the domain similarity, a classifier parameter adaptation method which modifies the learner for the new domain, and a set of class-conditional transformations aiming to increase the similarity between the posterior probability of samples in the source and target sets. We conduct experiments on a wide range of image and video classification tasks. We test our proposed methods and show that, in all cases, leveraging knowledge from a related domain can improve performance when there are no labels available for direct training on the new target data.
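    As a minimal sketch of one of the adaptation ingredients listed above (the shared lower-dimensional projection, and not the thesis's full framework), the snippet below fits a projection on source and unlabelled target data together, trains a classifier on the projected source features only, and predicts on the target. The data is synthetic and the component count is an illustrative assumption.

```python
# Shared-subspace sketch for transductive transfer: no target labels are used.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 100))            # labelled source features
ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.normal(loc=0.5, size=(400, 100))   # unlabelled, shifted target features

# Fit the projection on both domains so the subspace is shared
pca = PCA(n_components=20).fit(np.vstack([Xs, Xt]))
Zs, Zt = pca.transform(Xs), pca.transform(Xt)

clf = LogisticRegression(max_iter=1000).fit(Zs, ys)
target_pred = clf.predict(Zt)               # predictions for the unlabelled target set
print(target_pred[:10])
```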

    Transductive transfer learning for action recognition in tennis games

    This paper investigates the application of transductive transfer learning methods for action classification. The application scenario is that of off-line video annotation for retrieval. We show that if a classification system can analyze the unlabeled test data in order to adapt its models, a significant performance improvement can be achieved. We applied it to action classification in tennis games where the training and test videos are of a different nature. Actions are described using HOG3D features, and for transfer we used a method based on feature re-weighting and a novel method based on feature translation and scaling.
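    A minimal sketch of the feature translation and scaling idea (not the paper's exact method) is to shift and rescale the source descriptors so that their per-dimension statistics match those of the unlabelled test videos before training the classifier. The data below is synthetic and the HOG3D-like dimensionality is an illustrative assumption.

```python
# Align per-dimension mean and std of source descriptors to the target descriptors.
import numpy as np

rng = np.random.default_rng(1)
Xs = rng.gamma(2.0, 1.0, size=(200, 300))   # source HOG3D-like descriptors
Xt = rng.gamma(2.5, 1.3, size=(150, 300))   # target descriptors with different statistics

def translate_and_scale(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Map source features so each dimension has the target mean and std."""
    mu_s, sd_s = source.mean(axis=0), source.std(axis=0) + 1e-8
    mu_t, sd_t = target.mean(axis=0), target.std(axis=0)
    return (source - mu_s) / sd_s * sd_t + mu_t

Xs_adapted = translate_and_scale(Xs, Xt)
print(np.abs(Xs_adapted.mean(axis=0) - Xt.mean(axis=0)).max())  # ~0 after adaptation
```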

    The Effect of Fractional Inspired Oxygen Concentration on Early Warning Score Performance: a database analysis

    Objectives To calculate fractional inspired oxygen concentration (FiO2) thresholds in ward patients and add these to the National Early Warning Score (NEWS). To evaluate the performance of NEWS-FiO2 against NEWS when predicting in-hospital death and unplanned intensive care unit (ICU) admission. Methods A multi-centre, retrospective, observational cohort study was carried out in five hospitals from two UK NHS Trusts. Adult admissions with at least one complete set of vital sign observations recorded electronically were eligible. The primary outcome measure was an ‘adverse event’, which comprised either in-hospital death or unplanned ICU admission. Discrimination was assessed using the Area Under the Receiver Operating Characteristic curve (AUROC). Results A cohort of 83,304 patients from a total of 271,363 adult admissions were prescribed oxygen. In this cohort, NEWS-FiO2 (AUROC 0.823, 95% CI 0.819–0.824) outperformed NEWS (AUROC 0.811, 95% CI 0.809–0.814) when predicting in-hospital death or unplanned ICU admission within 24 h of a complete set of vital sign observations. Conclusions NEWS-FiO2 generates a performance gain over NEWS when studied in ward patients requiring oxygen. This warrants further study, particularly in patients with respiratory disorders.
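    A minimal sketch of the evaluation step is to compare the discrimination of the two scores with AUROC against the same outcome labels. The scores and outcomes below are synthetic stand-ins, not the study data.

```python
# Compare AUROC of two early-warning scores on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
outcome = rng.integers(0, 2, size=5000)               # 1 = death or unplanned ICU admission
news = outcome * 2.0 + rng.normal(size=5000)          # baseline score (synthetic)
news_fio2 = outcome * 2.3 + rng.normal(size=5000)     # score with FiO2 component (synthetic)

print("NEWS       AUROC:", round(roc_auc_score(outcome, news), 3))
print("NEWS-FiO2  AUROC:", round(roc_auc_score(outcome, news_fio2), 3))
```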

    CityPulse: Large Scale Data Analytics Framework for Smart Cities

    Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people’s everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. The goal is to break away from silo applications and enable cross-domain data integration. The CityPulse framework integrates multimodal, mixed-quality, uncertain and incomplete data to create reliable, dependable information and continuously adapts data processing techniques to meet the quality-of-information requirements of end users. Unlike existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support easy development of custom-made applications for citizens. The benefits and the effectiveness of the framework are demonstrated in a use-case scenario implementation presented in this paper.
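    As a minimal sketch of one analytics step a framework like this performs (and not a CityPulse component), the snippet below aggregates a sensor stream over a sliding window and flags an "event" when the latest reading deviates strongly from the recent window. The window size, threshold and data are illustrative assumptions.

```python
# Sliding-window event detection on a simple numeric stream.
from collections import deque
from statistics import mean, pstdev

def detect_events(stream, window=20, z_threshold=3.0):
    history = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history) or 1.0
            if abs(value - mu) / sigma > z_threshold:
                yield t, value                      # anomalous observation -> candidate event
        history.append(value)

readings = [10.0] * 50 + [42.0] + [10.0] * 20       # e.g. a sudden traffic spike
print(list(detect_events(readings)))
```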